gh-144342: Use time.sleep in profiling.sampling (#144343)
Conversation
Good catch!
The problem, I think, was that with very small sample intervals (microseconds), sleep was sleeping for longer than the requested sampling interval, which skewed the results and caused 'slow samples'. I think in some of the refactors the
Thanks a lot for the investigation, the issue, and the fix! Great work!
At one time, a decade or so ago, we were told that the best resolution of time.sleep on Windows was much larger than a millisecond. To test it now, I whipped up the following and ran it under IDLE. The minimum delta was 0.00101 and the max under 0.00105, with all or nearly all under 0.00102. I did not try to understand the details of the patch, but I expect it should work on Windows also.
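The measurement script mentioned above didn't survive into the quoted text; a minimal sketch of the same kind of check (timing repeated 1 ms sleeps with `time.perf_counter`) could look like this. The function name and parameters are made up for the illustration:

```python
import time

def sleep_deltas(requested=0.001, n=200):
    """Request a short sleep n times and record the actual elapsed
    wall time per call, to estimate time.sleep's effective resolution."""
    deltas = []
    for _ in range(n):
        start = time.perf_counter()
        time.sleep(requested)
        deltas.append(time.perf_counter() - start)
    return deltas

deltas = sleep_deltas()
print(f"min={min(deltas):.5f}s max={max(deltas):.5f}s")
```

On most platforms `time.sleep` sleeps for *at least* the requested time, so the minimum delta should never fall below the request; the interesting number is how far above it the maximum lands.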
The PR reduces the high CPU usage of `profiling.sampling` from 99%, without any impact on sampling quality or frequency. The trick is to just use `time.sleep`. Am I missing something obvious here?
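The core idea can be sketched as a loop that sleeps until the next absolute deadline instead of busy-waiting, so the loop yields the CPU while keeping the sampling frequency stable. This is an illustration, not the PR's actual code; `sample_loop` and its parameters are invented for the example:

```python
import time

def sample_loop(sample, interval, duration):
    """Sleep-based sampling loop sketch: advance an absolute deadline
    by `interval` each iteration and sleep only for the time remaining,
    so oversleeping on one sample does not accumulate as drift."""
    deadline = time.perf_counter()
    end = deadline + duration
    while deadline < end:
        sample()
        deadline += interval
        remaining = deadline - time.perf_counter()
        if remaining > 0:
            time.sleep(remaining)  # yields the CPU instead of spinning
        # if remaining <= 0 we are behind schedule: skip the sleep
        # and take the next sample immediately to catch up

samples = []
sample_loop(lambda: samples.append(time.perf_counter()),
            interval=0.001, duration=0.05)
print(f"took {len(samples)} samples")
```

Because the deadline advances by a fixed `interval` per iteration, the number of samples taken over `duration` is deterministic even when individual sleeps overshoot.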
I'm sure there are faster ways to do this (e.g. `timerfd`), but I'm not sure it's worth it, given the complexity / decreased maintainability.

Test
Linux (6.12.63+deb13-amd64, Debian trixie)
After: 6% CPU
Before: 99% CPU
macOS (Tahoe 26.2)
After
Before
Test scripts
test.sh
foof.py
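The contents of test.sh and foof.py aren't reproduced above. As a rough, hypothetical stand-in for the measurement, the CPU usage of a running process can be sampled with `ps`; the placeholder workload below is an assumption, not the PR's actual scripts:

```shell
# Start a placeholder workload in the background (stand-in for the
# profiler process whose CPU usage is being measured).
python3 -c "import time; time.sleep(10)" &
PID=$!

# Let it run briefly, then read its CPU usage from ps.
sleep 2
CPU=$(ps -o %cpu= -p "$PID")
echo "cpu=${CPU}"

kill "$PID"
```

The `%cpu=` format works on both Linux and macOS, matching the two platforms tested above.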